21 research outputs found

    Modified belief propagation decoders applied to non-CSS QLDGM codes.

    Quantum technology is becoming increasingly popular, and large companies are investing heavily to avoid being left behind in this technological race. At present, qubits and operational quantum channels may seem like far-fetched ideas, but quantum computing will be of critical importance in the future. This project provides a concise overview of the basics of coding theory and of how they can be used in the design of quantum computers. Specifically, it focuses on Low-Density Parity-Check (LDPC) codes, as they can be integrated within the stabilizer construction to build effective quantum codes. It then introduces the specifics of the quantum paradigm and presents the most common family of quantum codes: stabilizer codes. Finally, it describes the codes used in this project, discussing what type of codes they are and how they are designed. This last section also presents the ultimate goal of the project: applying modified belief propagation decoders, previously tested on QLDPC codes, to the proposed non-CSS QLDGM codes.
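
    The stabilizer construction mentioned above inherits the classical syndrome formalism: a parity-check matrix maps an error pattern to a syndrome, and the decoder (belief propagation, in this project) infers the error from that syndrome. As a minimal, self-contained illustration of the syndrome mechanism only — using a classical (7,4) Hamming code rather than the project's QLDGM codes or modified BP decoders — consider:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code:
# its seven columns are the seven distinct nonzero length-3 binary vectors.
H = np.array([
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 1],
])

def syndrome_decode(H, y):
    """Correct a single bit error: the syndrome of a one-bit error
    equals the corresponding column of H, so it can be looked up."""
    s = H @ y % 2
    y = y.copy()
    if s.any():
        for j in range(H.shape[1]):
            if np.array_equal(H[:, j], s):
                y[j] ^= 1
                break
    return y

received = np.zeros(7, dtype=int)
received[4] ^= 1                    # inject a single bit error
decoded = syndrome_decode(H, received)
```

    Belief propagation generalizes this idea to long sparse codes, where exhaustive syndrome lookup is infeasible, by passing messages on the code's factor graph.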

    Combining the Burrows-Wheeler Transform and RCM-LDGM Codes for the Transmission of Sources with Memory at High Spectral Efficiencies

    In this paper, we address the problem of implementing high-throughput Joint Source-Channel (JSC) coding schemes for the transmission of binary sources with memory over AWGN channels. The sources are modeled either by a Markov chain (MC) or a hidden Markov model (HMM). We propose a coding scheme based on the Burrows-Wheeler Transform (BWT) and the parallel concatenation of Rate-Compatible Modulation and Low-Density Generator Matrix (RCM-LDGM) codes. The proposed scheme uses the BWT to convert the original source with memory into a set of independent, non-uniform binary Discrete Memoryless Sources (DMSs), which are then separately encoded, at optimal rates, using RCM-LDGM codes.
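
    The BWT step can be illustrated in a few lines. This sketch shows only the transform itself, on a toy character string rather than the binary sources of the paper: sorting all rotations groups symbols by their following context, which is what converts memory into locally memoryless-looking segments that can then be encoded separately.

```python
def bwt(s, eos="$"):
    """Burrows-Wheeler Transform: append a sentinel, sort all cyclic
    rotations, and read off the last column."""
    s = s + eos
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

out = bwt("banana")   # → "annb$aa": like symbols are clustered into runs
```

    Note how the transform is only a permutation of the input (plus the sentinel); no information is lost, and the inverse transform recovers the original sequence at the receiver.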

    On the Performance of Interleavers for Quantum Turbo Codes

    Quantum turbo codes (QTCs) have shown excellent error correction capabilities in the setting of quantum communication, achieving performance less than 1 dB away from their corresponding hashing bounds. Existing QTCs have been constructed using uniform random interleavers. However, interleaver design plays an important role in the optimization of classical turbo codes. Consequently, inspired by the widely used classical-to-quantum isomorphism, this paper studies the integration of classical interleaver design methods into the paradigm of quantum turbo coding. Simulation results demonstrate that proper interleaver design can significantly lower the error floors of QTCs, while decreasing memory consumption and without increasing the overall decoding complexity of the system.
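
    For readers unfamiliar with interleaver design, a classical row-column interleaver illustrates the structured alternative to the uniform random interleavers used in existing QTCs: it guarantees that adjacent input symbols are spread apart, a property random permutations provide only on average. This is a generic textbook construction, not one of the specific designs evaluated in the paper.

```python
def block_interleave(seq, rows, cols):
    """Row-column interleaver: write the sequence row-wise into a
    rows x cols array, read it out column-wise. Consecutive inputs
    end up at least `rows` positions apart in the output."""
    assert len(seq) == rows * cols
    return [seq[r * cols + c] for c in range(cols) for r in range(rows)]

out = block_interleave(list(range(12)), rows=3, cols=4)
# Minimum output distance between consecutive input indices:
pos = {v: i for i, v in enumerate(out)}
min_spread = min(abs(pos[i + 1] - pos[i]) for i in range(11))
```

    The guaranteed spread is what helps break up burst errors and lower error floors, at no cost in decoding complexity.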

    On the combination of multi-cloud and network coding for cost-efficient storage in industrial applications

    The adoption of both Cyber-Physical Systems (CPSs) and the Internet of Things (IoT) has enabled the evolution towards the so-called Industry 4.0. These technologies, together with cloud computing and artificial intelligence, foster new business opportunities. Moreover, several industrial applications require immediate decision making, and fog computing is emerging as a promising solution to address this requirement. In order to achieve a cost-efficient system, we propose taking advantage of spot instances, a service offered by cloud providers that supplies resources at lower prices. The main downside of these instances is that they do not ensure service continuity and may suffer interruptions. An architecture that combines fog and multi-cloud deployments with Network Coding (NC) techniques guarantees the needed fault tolerance for the cloud environment and reduces the amount of redundant data required to provide reliable services. In this paper we analyze how NC can help reduce the storage cost and improve the resource efficiency of industrial applications based on a multi-cloud infrastructure. The cost analysis has been carried out using both real AWS EC2 spot instance prices and, to complement them, prices obtained from a model based on a finite Markov chain derived from real measurements. We analyze the overall system cost as a function of different parameters, showing that configurations that minimize storage yield the largest cost reductions, due to the strong impact of the storage cost.
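
    The finite-Markov-chain interruption model and the replication-versus-coding trade-off can be sketched as follows. All numbers here are hypothetical placeholders for illustration, not the measured AWS EC2 values used in the paper.

```python
import numpy as np

# Hypothetical two-state model of a spot instance:
# state 0 = running, state 1 = interrupted (illustrative probabilities).
P = np.array([
    [0.95, 0.05],   # running -> running / interrupted
    [0.60, 0.40],   # interrupted -> running / interrupted
])

# Long-run availability is the stationary distribution: pi P = pi.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
availability = pi[0]

# Storage overhead: 3-way replication vs an (n, k) coded scheme,
# both tolerating the loss of any 2 storage nodes.
replication_overhead = 3 / 1   # three full copies per block
nc_overhead = 6 / 4            # e.g. n = 6 coded fragments, any k = 4 suffice
```

    The coded configuration tolerates the same number of failures while storing far less redundant data, which is the cost lever the paper quantifies against real spot-price traces.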

    Error correction for reliable quantum computing

    Quantum computers herald the arrival of a new era in which previously intractable computational problems will be solved efficiently. However, quantum technology is held back by decoherence, a phenomenon that is omnipresent in the quantum paradigm and that renders quantum information useless when left unchecked. The science of quantum error correction, a discipline that seeks to protect quantum information from the effects of decoherence using structures known as codes, has arisen to meet this challenge. Stabilizer codes, a particular subclass of quantum codes, have enabled fast progress in the field of quantum error correction by allowing parallels to be drawn with the widely studied field of classical error correction. This has resulted in the construction of quantum counterparts of well-known capacity-approaching classical codes, such as sparse codes and turbo codes. However, quantum codes obtained in this manner do not fully match the remarkable error correcting abilities of their classical counterparts. This occurs because classical strategies ignore important differences between the quantum and classical paradigms, an issue that must be addressed if quantum error correction is to succeed in its battle with decoherence. In this dissertation we study a phenomenon exclusive to the quantum paradigm, known as degeneracy, and its effects on the performance of sparse quantum codes. Furthermore, we also analyze and present methods to improve the performance of a specific family of sparse quantum codes in various scenarios.
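
    Degeneracy, the quantum-exclusive phenomenon studied in the dissertation, can be made concrete with the smallest stabilizer code. In the 3-qubit bit-flip code, the distinct operators Z on qubit 1 and Z on qubit 2 act identically on every state in the codespace, so a decoder need not distinguish them — a situation with no classical analogue. The numerical check below is an illustrative example, not material from the dissertation itself.

```python
import numpy as np

I2 = np.eye(2)
Z = np.array([[1, 0], [0, -1]])

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

# Codespace of the 3-qubit bit-flip code is spanned by |000> and |111>.
ket0 = np.zeros(8); ket0[0] = 1.0   # |000>
ket1 = np.zeros(8); ket1[7] = 1.0   # |111>

Z1 = kron3(Z, I2, I2)   # phase error on qubit 1
Z2 = kron3(I2, Z, I2)   # phase error on qubit 2

# Z1 and Z2 differ as operators on the full 8-dimensional space...
psi = (ket0 + ket1) / np.sqrt(2)
# ...yet act identically on any codeword: both leave |000> alone
# and flip the sign of |111>, so they are degenerate errors.
```

    A decoder that insists on identifying which of the two errors occurred is solving a harder problem than necessary, which is one reason classically inspired decoders underperform on sparse quantum codes.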

    Decoherence and quantum error correction for quantum computing and communications.

    Quantum technologies have shown immense potential to efficiently solve several information processing tasks, such as prime number factorization, unstructured database search or the simulation of complex macromolecules. As a result of this capability to solve certain problems that are not classically tractable, quantum machines have the potential to revolutionize the modern world via applications such as drug design, process optimization, unbreakable communications or machine learning. However, quantum information is prone to errors caused by so-called decoherence, the loss of coherence of quantum states due to their interactions with the surrounding environment. This decoherence phenomenon is present in every quantum information task, be it transmission, processing or storage of quantum information. Consequently, the protection of quantum information via quantum error correction codes (QECCs) is of paramount importance for the construction of fully operational quantum computers. Understanding environmental decoherence processes and the way they are modeled is fundamental to constructing effective error correction methods capable of protecting quantum information. In this thesis, the nature of decoherence is studied and mathematically modeled, and QECCs are designed and optimized to exhibit better error correction capabilities.
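
    One standard mathematical model of the decoherence described above is the amplitude damping channel, which captures energy relaxation and is specified by a pair of Kraus operators. The sketch below uses this generic textbook channel (not a model developed in the thesis) to verify the completeness relation and show population decay from the excited state.

```python
import numpy as np

def amplitude_damping_kraus(gamma):
    """Kraus operators of the amplitude damping channel with
    relaxation probability gamma."""
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
    return K0, K1

def apply_channel(rho, kraus_ops):
    """Evolve a density matrix: rho -> sum_k K_k rho K_k^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

gamma = 0.3
K0, K1 = amplitude_damping_kraus(gamma)
rho1 = np.array([[0.0, 0.0], [0.0, 1.0]])   # excited state |1><1|
rho_out = apply_channel(rho1, (K0, K1))
# A fraction gamma of the excited-state population decays to |0><0|.
```

    Completeness (sum of K†K equal to the identity) guarantees the channel is trace-preserving, the basic sanity check for any decoherence model used when designing QECCs.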

    Design of High Rate RCM-LDGM Codes

    This master thesis studies the design of high-rate RCM-LDGM codes and is divided into two parts. The first part proposes an EXIT chart analysis and a Bit Error Rate (BER) prediction procedure suitable for implementing high-rate codes based on the parallel concatenation of a Rate-Compatible Modulation (RCM) code with a Low-Density Generator Matrix (LDGM) code. The decoder of a parallel RCM-LDGM code is based on the iterative Sum-Product algorithm, which exchanges information between variable nodes (VNs) and the corresponding two types of check nodes: RCM-CNs and LDGM-CNs. Obtaining good codes that achieve near-Shannon-limit performance requires computing BER versus SNR behavior for different families of candidate code design parameters. For large input block lengths, this can take a large amount of simulation time. To overcome this drawback, the proposed EXIT-BER chart procedure predicts this BER versus SNR behavior much faster, and consequently speeds up the design procedure. The second part studies two different strategies for transmitting sources with memory. The first strategy exploits the source statistics in the decoding process by attaching the factor graph of the source to the RCM-LDGM one and running the Sum-Product algorithm over the entire factor graph. The second strategy uses the Burrows-Wheeler Transform to convert the source with memory into several independent binary Discrete Memoryless Sources (DMSs) and encodes them separately.
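
    The simulation burden that motivates the EXIT-BER procedure is easy to see: even a single point on a BER-versus-SNR curve requires a Monte Carlo loop over many bits. The sketch below simulates uncoded BPSK over AWGN — a baseline illustration, not the coded RCM-LDGM system of the thesis — and checks the estimate against the closed-form Q-function.

```python
import math
import random

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber_sim(ebn0_db, n_bits=200_000, seed=1):
    """Monte Carlo BER of uncoded BPSK over AWGN: transmit +1,
    add Gaussian noise, decide by sign."""
    random.seed(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))
    errors = 0
    for _ in range(n_bits):
        if 1 + random.gauss(0, sigma) < 0:
            errors += 1
    return errors / n_bits

ber_sim = bpsk_ber_sim(4.0)
ber_theory = qfunc(math.sqrt(2 * 10 ** (4.0 / 10)))   # analytic BER at 4 dB
```

    For coded systems with long blocks no such closed form exists, so each design point would need a loop like this over millions of bits — precisely the cost the EXIT-BER prediction avoids.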

    Comprehensive behavioural model for an electronic amplifier used in an optical communications system

    Correctly modelling the behaviour of power amplifiers to include the nonlinear aspects of their response has long been a matter of interest in the fields of electronics and wireless communications. Throughout the last decades, numerous techniques have been derived that accurately represent these nonlinearities and provide ways to reduce their impact on system performance. An essential application of these techniques is the construction of amplifier models that can be used to gain a deeper understanding of the phenomena that make these devices more or less nonlinear. Such models have been used in numerous scenarios to understand and mitigate the impact of nonlinearity on the response of many amplifiers. However, ultra-broadband RF electronic drive amplifiers used in optical communications environments represent a small niche for which extensive derivation of such amplifier models has not yet been accomplished. In this thesis, a device-specific behavioural model for one such electronic drive amplifier is constructed based on existing modelling techniques and extensive lab measurements. In addition, the changes in the performance of a communications system when these behavioural models are integrated within it are also studied. Ultimately, this thesis strives to determine the specific phenomena linked to the nonlinearity present in the response of an ultra-broadband RF electronic drive amplifier and how variation of these factors affects the behaviour of the device.
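
    A common starting point for the behavioural models discussed above is a memoryless polynomial: the cubic coefficient simultaneously produces gain compression and odd-order harmonics. The coefficients below are hypothetical, and this generic model deliberately omits the memory effects a device-specific model must also capture.

```python
import numpy as np

def memoryless_amp(x, a1=1.0, a3=-0.1):
    """Hypothetical memoryless polynomial amplifier model:
    y = a1*x + a3*x^3. A negative a3 compresses large inputs."""
    return a1 * x + a3 * x ** 3

n = 1024
t = np.arange(n)
x = np.sin(2 * np.pi * 8 * t / n)        # single tone at FFT bin 8
y = memoryless_amp(x)

# Normalized magnitude spectrum: the cubic term puts energy
# at the third harmonic (bin 24), absent from a linear response.
spectrum = np.abs(np.fft.rfft(y)) / n
```

    Measuring how much energy such a model places in harmonics (and, for two-tone inputs, intermodulation products) is the basic diagnostic used to fit behavioural models to lab measurements.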
